Arbitrary Precision Algorithms for Computing the Matrix Cosine and its Fréchet Derivative

Authors

Abstract

Existing algorithms for computing the matrix cosine are tightly coupled to a specific precision of floating-point arithmetic for optimal efficiency, so they do not conveniently extend to an arbitrary precision environment. We develop an algorithm for computing the matrix cosine that takes the unit roundoff of the working precision as input, and so works in an arbitrary precision. The algorithm employs a Taylor approximation with scaling and recovering, and it can be used with a Schur decomposition or in a decomposition-free manner. We also derive a framework for computing the Fréchet derivative, construct an efficient evaluation scheme for computing the cosine and its Fréchet derivative simultaneously in arbitrary precision, and show how this scheme can be extended to compute the matrix sine, the matrix cosine, and their Fréchet derivatives all together. Numerical experiments show that the new algorithms behave in a forward stable way over a wide range of precisions. The transformation-free version is competitive in accuracy with the state-of-the-art algorithms in double precision and surpasses existing alternatives in both speed and accuracy for precisions higher than double.
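As a rough illustration of the Taylor-plus-scaling-and-recovering idea described above (and only that: the published algorithm chooses the scaling parameter and Taylor degree from the unit roundoff supplied as input, which this sketch does not attempt), the following Python snippet scales A, evaluates a truncated Taylor series, and recovers cos(A) with the double-angle formula cos(2X) = 2 cos(X)^2 - I. The scaling and degree choices here are ad hoc and assumed only for demonstration.

```python
import numpy as np

def cosm_taylor(A, s=None, m=20):
    # Sketch: evaluate a truncated Taylor series for cos on the scaled matrix
    # A / 2^s, then recover cos(A) by applying the double-angle formula
    # cos(2X) = 2 cos(X)^2 - I a total of s times.  The scaling choice and
    # the fixed degree m are ad hoc, not the error-bound-driven choices of
    # the article.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if s is None:
        s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    X = A / 2**s
    X2 = X @ X
    C = np.eye(n)
    term = np.eye(n)
    for k in range(1, m + 1):
        term = term @ X2 / ((2*k - 1) * (2*k))   # X^(2k) / (2k)!
        C = C + (-1)**k * term
    for _ in range(s):
        C = 2 * C @ C - np.eye(n)                # undo the scaling
    return C

A = np.array([[0.1, 2.0], [0.0, 0.3]])
print(cosm_taylor(A))
```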


Related articles

Computing the Fréchet Derivative of the Matrix Logarithm and Estimating the Condition Number

The most popular method for computing the matrix logarithm is the inverse scaling and squaring method, which is the basis of the recent algorithm of [A. H. Al-Mohy and N. J. Higham, Improved inverse scaling and squaring algorithms for the matrix logarithm, SIAM J. Sci. Comput., 34 (2012), pp. C152–C169]. We show that by differentiating the latter algorithm a backward stable algorithm for comput...
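As background on the Fréchet derivative itself (using a standard block-matrix device from the matrix function literature, not the differentiated inverse scaling and squaring algorithm referenced above), a single directional derivative of the matrix logarithm can be read off the (1,2) block of log applied to a 2x2 block upper triangular matrix, which also yields a crude lower bound on the condition number:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
n = 3
# Keep A close to the identity so log(A) is defined (no eigenvalues on the negative real axis).
A = np.eye(n) + 0.2 * rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

# For f sufficiently smooth on the spectrum of A,
#   f([[A, E], [0, A]]) = [[f(A), L_f(A, E)], [0, f(A)]],
# so the (1,2) block below is the Fréchet derivative of log at A in direction E.
Z = np.block([[A, E], [np.zeros((n, n)), A]])
L = logm(Z)[:n, n:]

# A single direction gives a lower bound on ||L_log(A)||, hence a crude lower bound
# on the relative condition number cond(log, A) = ||L_log(A)|| * ||A|| / ||log(A)||.
est = np.linalg.norm(L) / np.linalg.norm(E) * np.linalg.norm(A) / np.linalg.norm(logm(A))
print(est)
```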


Computing the Fréchet Derivative of the Matrix Exponential, with an Application to Condition Number Estimation

The matrix exponential is a much-studied matrix function having many applications. The Fréchet derivative of the matrix exponential describes the first-order sensitivity of e^A to perturbations in A and its norm determines a condition number for e^A. Among the numerous methods for computing e^A the scaling and squaring method is the most widely used. We show that the implementation of the method i...
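For a hands-on check of these notions, SciPy exposes a Fréchet derivative routine for the matrix exponential, scipy.linalg.expm_frechet, and a condition number routine, scipy.linalg.expm_cond; the snippet below compares the directional derivative with a finite difference on a small random matrix (the test matrices and the step size are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import expm, expm_frechet, expm_cond

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
E = rng.standard_normal((4, 4))

# expm_frechet returns e^A and the Fréchet derivative L(A, E) in direction E.
expA, L = expm_frechet(A, E)

# Sanity check against a one-sided finite difference (step size chosen loosely);
# the relative difference should be small, roughly sqrt(machine epsilon)-sized.
t = 1e-7
fd = (expm(A + t * E) - expm(A)) / t
print(np.linalg.norm(fd - L) / np.linalg.norm(L))

# Relative condition number of the matrix exponential at A (Frobenius norm).
print(expm_cond(A))
```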


New Algorithms for Computing the Matrix Sine and Cosine Separately or Simultaneously

Several existing algorithms for computing the matrix cosine employ polynomial or rational approximations combined with scaling and use of a double angle formula. Their derivations are based on forward error bounds. We derive new algorithms for computing the matrix cosine, the matrix sine, and both simultaneously, that are backward stable in exact arithmetic and behave in a forward stable manner...
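The double-angle idea extends naturally to computing the sine and cosine together via sin(2X) = 2 sin(X) cos(X) and cos(2X) = 2 cos(X)^2 - I. The sketch below illustrates only that recurrence, with ad hoc scaling and Taylor degree rather than the backward-error-based choices of the article, and checks the result against e^{iA} = cos(A) + i sin(A) for real A:

```python
import numpy as np
from scipy.linalg import expm

def sincos_double_angle(A, s=8, m=12):
    # Truncated Taylor series for sin and cos of A / 2^s, followed by the
    # double-angle recurrences.  The fixed s and m are ad hoc choices.
    n = A.shape[0]
    X = A / 2**s
    X2 = X @ X
    C, S = np.eye(n), X.copy()
    termC, termS = np.eye(n), X.copy()
    for k in range(1, m + 1):
        termC = termC @ X2 / ((2*k - 1) * (2*k))   # X^(2k) / (2k)!
        termS = termS @ X2 / ((2*k) * (2*k + 1))   # X^(2k+1) / (2k+1)!
        C = C + (-1)**k * termC
        S = S + (-1)**k * termS
    for _ in range(s):
        S = 2 * S @ C              # sin(2X) = 2 sin(X) cos(X), using the old cos
        C = 2 * C @ C - np.eye(n)  # cos(2X) = 2 cos(X)^2 - I
    return S, C

A = np.array([[0.3, 1.2], [-0.5, 0.1]])
S, C = sincos_double_angle(A)
W = expm(1j * A)   # e^{iA} = cos(A) + i sin(A) for real A
print(np.linalg.norm(C - W.real), np.linalg.norm(S - W.imag))
```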


Algorithms for arbitrary precision floating point arithmetic

We present techniques which may be used to perform computations of very high accuracy using only straightforward floating point arithmetic operations of limited precision, and we prove the validity of these techniques under very general hypotheses satisfied by most implementations of floating point arithmetic. To illustrate the application of these techniques, we present an algorithm which computes ...
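The basic building block behind such techniques is an error-free transformation: the rounding error of a floating-point addition can itself be obtained exactly with a few extra floating-point operations. The snippet below shows Knuth's TwoSum and a simple compensated summation built on it, as a generic illustration of the idea rather than the specific algorithm of the article:

```python
def two_sum(a, b):
    # Knuth's error-free transformation: s is the rounded sum and e the exact
    # rounding error, so s + e equals a + b exactly, using only float operations.
    s = a + b
    bv = s - a            # the part of b that made it into s
    av = s - bv           # the part of a that made it into s
    e = (a - av) + (b - bv)
    return s, e

def compensated_sum(values):
    # Accumulate the rounding error of each addition separately and fold it
    # back in at the end (a simple compensated summation).
    total, err = 0.0, 0.0
    for v in values:
        total, e = two_sum(total, v)
        err += e
    return total + err

vals = [1e16, 1.0, -1e16, 1.0]
print(sum(vals))              # plain summation loses the small terms: 1.0
print(compensated_sum(vals))  # compensated summation recovers the exact 2.0
```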


Arbitrary precision real arithmetic: design and algorithms

We describe here a representation of computable real numbers and a set of algorithms for the elementary functions associated to this representation. A real number is represented as a sequence of finite B-adic numbers and for each classical function (rational, algebraic or transcendental), we describe how to produce a sequence representing the result of the application of this function to its argu...
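A loose analogue of this "sequence of approximations" view, using the mpmath library rather than the B-adic representation of the article, is to re-evaluate a function at increasing working precision until two consecutive results agree to the requested number of digits; the stopping rule below is a heuristic, not a rigorous guarantee:

```python
from mpmath import mp, mpf, cos

def eval_to_tolerance(f, x, target_digits):
    # Re-evaluate f(x) at growing working precision until two consecutive
    # approximations agree to target_digits decimal digits (heuristic stop).
    prev = None
    digits = target_digits + 5
    while True:
        mp.dps = digits                  # working precision in decimal digits
        cur = f(mpf(x))
        if prev is not None and abs(cur - prev) <= mpf(10) ** (-target_digits):
            return cur
        prev, digits = cur, 2 * digits

print(eval_to_tolerance(cos, '0.5', 50))
```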



Journal

Journal title: SIAM Journal on Matrix Analysis and Applications

Year: 2022

ISSN: 1095-7162, 0895-4798

DOI: https://doi.org/10.1137/21m1441043